Easy2Siksha.com
GNDU QUESTION PAPERS 2022
BA/BSc 4th SEMESTER
QUANTITATIVE TECHNIQUES - IV
Time Allowed: 3 Hours          Maximum Marks: 100
Note: Attempt Five questions in all, selecting at least One question from each section. The Fifth question may be attempted from any section. All questions carry equal marks.
SECTION–A
1.(a) Disnguish between paral and mulple correlaon with the help of hypothecal
examples.
(b) From the following data, calculate all the mulple correlaon coecients :






2.(a) What is modied exponenal curve? Discuss how the unknowns are esmated in
modied exponenal curve.
(b) Fit exponenal curve of type : 
to the following data :
Year :
1979
1980
1981
1982
1983
Prot : ('000 Rs.)
1.6
4.5
13.8
46.2
125.0
SECTION–B
3.(a) Dene probability.
(b) State mulplicave law of probability. How is the result modied when the events are
independent ?
(c) Suppose from a pack of 52 cards, one card is drawn at random. What is the probability
that it is either a king or a queen ?
4.(a) Dene random variable along with its types.
(b) Disnguish between probability density funcon and probability mass funcon.
SECTION-C
5. Dene Poisson distribuon and state the condions under which this distribuon is
used. Also derive main properes of Poisson distribuon.
6. What is Binomial distribuon? Derive its important properes.
SECTION-D
7. Disnguish between a populaon and a sample. Discuss the relave merits of census
and sample methods of collecng data.
8.(a) What is an esmator? Also discuss some important properes of a good esmator.
(b) Explain the concept of standard error.
GNDU ANSWER PAPERS 2022
BA/BSc 4th SEMESTER
QUANTITATIVE TECHNIQUES - IV
Time Allowed: 3 Hours          Maximum Marks: 100
Note: Attempt Five questions in all, selecting at least One question from each section. The Fifth question may be attempted from any section. All questions carry equal marks.
SECTION–A
1.(a) Disnguish between paral and mulple correlaon with the help of hypothecal
examples.
(b) From the following data, calculate all the mulple correlaon coecients :






Ans: (a) Difference between Partial and Multiple Correlation
1. Multiple Correlation
Multiple correlation is used when we want to study how one dependent variable is jointly
related to two or more independent variables.
In very simple words:
Multiple correlation answers this question:
👉 “How well can two (or more) variables together explain or predict another variable?”
For example:
Suppose we want to study how a student’s Marks (X₁) depend on:
Study Hours (X₂)
Attendance (X₃)
Here, we are not looking at the relationship between marks and study hours alone, nor
between marks and attendance alone. We want to see their combined effect on marks.
This combined relationship is called Multiple Correlation, and is represented as:
R₁.₂₃
(read as: multiple correlation of variable 1 with variables 2 and 3)
If R₁.₂₃ is high (close to 1), it means together Study Hours and Attendance strongly influence
Marks.
If it is low (close to 0), it means even together they do not explain Marks much.
2. Partial Correlation
Partial correlation is used when we want to study the relationship between two variables
after removing the effect of other variables.
In simple language:
👉 “How are two variables related when the influence of a third variable is kept constant or
eliminated?”
Example:
Again consider:
X₁ = Marks
X₂ = Study Hours
X₃ = Intelligence
Marks and study hours are naturally related. But intelligent students may:
understand quickly
study less but still score high
So sometimes intelligence may artificially increase the relationship between Marks and
Study Hours.
If we want to measure the pure relationship between Marks and Study Hours after
removing the effect of Intelligence, we use Partial Correlation.
This is denoted as:
r₁₂·₃
(read as: correlation between 1 and 2, keeping 3 constant)
So:
Multiple correlation = combined effect of several variables on one
Partial correlation = purified relationship between two variables after removing
the effect of others
Hypothetical Example to Make It Crystal Clear
Imagine you want to see why a plant grows better.
Growth of plant (Y) depends on:
Water (X₁)
Fertilizer (X₂)
Sunlight (X₃)
Multiple Correlation Example
If you want to see:
“How do water and fertilizer together affect plant growth?”
That is multiple correlation:
Rᵧ.₁₂
Partial Correlation Example
Now suppose you want to see:
“What is the relationship between fertilizer and plant growth if the effect of water is
eliminated?”
That is partial correlation:
rᵧ₂·₁
So partial isolates, while multiple combines.
(b) Calculation of Multiple Correlation Coefficients
We are given:
r₁₂ = 0.5
r₁₃ = 0.6
r₂₃ = 0.7
We have to calculate all three multiple correlation coefficients:
R₁.₂₃
R₂.₁₃
R₃.₁₂
Formula for Multiple Correlation (for three variables)
Multiple correlation of X₁ with X₂ and X₃:
R₁.₂₃ = √[(r₁₂² + r₁₃² − 2 r₁₂ r₁₃ r₂₃) / (1 − r₂₃²)]
Substitute values:
R₁.₂₃ = √[(0.25 + 0.36 − 2 × 0.5 × 0.6 × 0.7) / (1 − 0.49)]
      = √[(0.61 − 0.42) / 0.51]
      = √(0.19 / 0.51)
      = √0.3725
      ≈ 0.61 (approx)
Multiple correlation of X₂ with X₁ and X₃:
R₂.₁₃ = √[(r₁₂² + r₂₃² − 2 r₁₂ r₂₃ r₁₃) / (1 − r₁₃²)]
      = √[(0.25 + 0.49 − 2 × 0.5 × 0.7 × 0.6) / (1 − 0.36)]
      = √[(0.74 − 0.42) / 0.64]
      = √0.5000
      ≈ 0.71 (approx)
Multiple correlation of X₃ with X₁ and X₂:
R₃.₁₂ = √[(r₁₃² + r₂₃² − 2 r₁₃ r₂₃ r₁₂) / (1 − r₁₂²)]
      = √[(0.36 + 0.49 − 2 × 0.6 × 0.7 × 0.5) / (1 − 0.25)]
      = √[(0.85 − 0.42) / 0.75]
      = √0.5733
      ≈ 0.76 (approx)
Final Answers
R₁.₂₃ ≈ 0.61
R₂.₁₃ ≈ 0.71
R₃.₁₂ ≈ 0.76
Simple Interpretation
R₁.₂₃ ≈ 0.61 → Variables 2 and 3 together moderately explain variable 1.
R₂.₁₃ ≈ 0.71 → Variables 1 and 3 together explain variable 2 a little better.
R₃.₁₂ ≈ 0.76 → Variables 1 and 2 together have the strongest combined relationship with variable 3.
So among the three, X₃ is best predicted when using X₁ and X₂ together.
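The arithmetic above can be cross-checked with a short script (a minimal sketch using only Python's standard library; the helper name `multiple_r` is ours, not standard notation):

```python
from math import sqrt

def multiple_r(r_ab, r_ac, r_bc):
    # Multiple correlation of variable a on variables b and c:
    # R_a.bc = sqrt((r_ab^2 + r_ac^2 - 2*r_ab*r_ac*r_bc) / (1 - r_bc^2))
    return sqrt((r_ab**2 + r_ac**2 - 2 * r_ab * r_ac * r_bc) / (1 - r_bc**2))

r12, r13, r23 = 0.5, 0.6, 0.7
print(round(multiple_r(r12, r13, r23), 2))  # R1.23 ≈ 0.61
print(round(multiple_r(r12, r23, r13), 2))  # R2.13 ≈ 0.71
print(round(multiple_r(r13, r23, r12), 2))  # R3.12 ≈ 0.76
```

Note how the same formula serves all three coefficients: only the roles of the three pairwise correlations rotate.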
Conclusion
Partial and multiple correlation are like powerful lenses:
Multiple correlation shows the combined effect of many variables on one.
Partial correlation removes the disturbance of other variables to show the pure
relation between two.
2.(a) What is modied exponenal curve? Discuss how the unknowns are esmated in
modied exponenal curve.
(b) Fit exponenal curve of type : 
to the following data :
Year :
1979
1980
1981
1982
1983
Prot : ('000 Rs.)
1.6
4.5
13.8
46.2
125.0
Ans: Modified exponential curve and fitting an exponential trend
Concept overview
In many real-world processes (profits, populations, technology adoption), values grow or decay by constant percentages rather than by constant amounts. An exponential curve captures this behavior. A commonly used form is the "modified exponential" or "exponential trend":
y = a·bˣ
Meaning: y changes multiplicatively with x. If b > 1, there is exponential growth; if 0 < b < 1, there is exponential decay.
Why "modified"?: We often "modify" the curve for fitting by transforming it into a linear form using logarithms, which lets us estimate the unknowns a and b with ordinary least squares. Sometimes the origin of time is shifted (taking a middle-year origin) to reduce numerical correlation, but the core idea is the same: turn a multiplicative model into an additive one.
In practice, we estimate a and b by taking logs on both sides:
ln y = ln a + x ln b
Let Y = ln y, A = ln a, and B = ln b. Then the model becomes the simple linear regression
Y = A + Bx
We can find A and B via least squares, and then recover a = e^A and b = e^B.
Estimating the unknowns in a modified exponential curve
To estimate a and b from data (xᵢ, yᵢ):
1. Transform the data
o Take natural logs of the y-values: Yᵢ = ln(yᵢ). Keep xᵢ as-is, or shift the origin (e.g., set the first year to x = 0) for convenience.
2. Fit the linear model Y = A + Bx
o The least-squares estimators solve:
B = [n Σ xᵢYᵢ − (Σ xᵢ)(Σ Yᵢ)] / [n Σ xᵢ² − (Σ xᵢ)²]
and
A = (Σ Yᵢ − B Σ xᵢ) / n
where n is the number of observations.
3. Back-transform to get a and b
o Compute a = e^A, b = e^B.
o The fitted curve is ŷ = a·bˣ.
4. Check the fit (optional)
o Compare actual vs. fitted values to see how well the curve captures the trend.
Worked example: Fit y = a·bˣ to the profit data
We are given:
Years: 1979, 1980, 1981, 1982, 1983
Profits (in '000 Rs.): 1.6, 4.5, 13.8, 46.2, 125.0
For easy computation, set the time origin at 1979 so that:
x = 0, 1, 2, 3, 4 correspond to 1979–1983.
y = [1.6, 4.5, 13.8, 46.2, 125.0]
Step 1: Transform the y-values
Compute Y = ln y (natural log):
1979 (x=0): ln(1.6) = 0.4700
1980 (x=1): ln(4.5) = 1.5041
1981 (x=2): ln(13.8) = 2.6247
1982 (x=3): ln(46.2) = 3.8330
1983 (x=4): ln(125.0) = 4.8283
Summations for least squares:
Sum of x: Σx = 0 + 1 + 2 + 3 + 4 = 10
Sum of x²: Σx² = 0 + 1 + 4 + 9 + 16 = 30
Sum of Y: ΣY = 0.4700 + 1.5041 + 2.6247 + 3.8330 + 4.8283 = 13.2601
Sum of xY: ΣxY = 0 + 1.5041 + 5.2494 + 11.4990 + 19.3132 = 37.5657
Step 2: Estimate A and B
Use the least-squares formulas:
B = [n ΣxY − (Σx)(ΣY)] / [n Σx² − (Σx)²]
  = [5 × 37.5657 − 10 × 13.2601] / [5 × 30 − 10²]
  = (187.8285 − 132.6010) / 50
  = 55.2275 / 50
  = 1.10455 ≈ 1.1046
A = (ΣY − B Σx) / n
  = (13.2601 − 1.10455 × 10) / 5
  = 2.2146 / 5
  ≈ 0.4429
Step 3: Back-transform to get a and b
a = e^A = e^0.4429 ≈ 1.557
and
b = e^B = e^1.1046 ≈ 3.018
Therefore, the fitted exponential curve is:
ŷ = 1.557 × (3.018)ˣ
Step 4: Sanity check (compare fitted vs actual)
x=0 (1979): 1.56 vs actual 1.6
x=1 (1980): 4.70 vs 4.5
x=2 (1981): 14.18 vs 13.8
x=3 (1982): 42.80 vs 46.2
x=4 (1983): 129.17 vs 125.0
The fitted curve tracks the explosive growth very closely, with small deviations (expected due to random fluctuations and measurement noise).
Putting it all together
(a) Modified exponential curve: Refers to fitting an exponential relationship y = a·bˣ by "modifying" it into a linear form via logarithms: ln y = ln a + x ln b. The unknowns a and b are estimated using least squares on the transformed data, then back-transformed.
(b) Fitted curve for the profit data: Using least squares on Y = ln y with x = 0 at 1979, we obtain:
o A ≈ 0.4429, so a = e^A ≈ 1.557
o B ≈ 1.1046, so b = e^B ≈ 3.018
o Final model: ŷ = 1.557 × (3.018)ˣ
This model says profits increased by roughly a factor of 3 each year (an approximately 200% year-on-year growth rate), consistent with an exponential boom.
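The entire fitting procedure can be reproduced in a few lines (a sketch using only Python's standard library, with the time origin at 1979 as in the working above):

```python
from math import log, exp

x = [0, 1, 2, 3, 4]                      # years measured from 1979
y = [1.6, 4.5, 13.8, 46.2, 125.0]        # profits ('000 Rs.)

Y = [log(v) for v in y]                  # transform: Y = ln y
n = len(x)
sx, sY = sum(x), sum(Y)
sxx = sum(v * v for v in x)
sxY = sum(u * v for u, v in zip(x, Y))

B = (n * sxY - sx * sY) / (n * sxx - sx * sx)  # slope = ln b
A = (sY - B * sx) / n                          # intercept = ln a
a, b = exp(A), exp(B)
print(round(a, 3), round(b, 3))                # ≈ 1.557 3.018
```

The same transform-fit-backtransform pattern works for any data that grows by a roughly constant percentage per period.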
SECTION–B
3.(a) Define probability.
(b) State the multiplicative law of probability. How is the result modified when the events are independent?
(c) Suppose from a pack of 52 cards, one card is drawn at random. What is the probability that it is either a king or a queen?
Ans: (a) Define Probability Explained Simply
Imagine you and your friends are playing a game. You toss a coin and guess whether it will
be Head or Tail. Sometimes you win, sometimes you lose. But if someone asks you, “What
are the chances that the coin will show Head?” how do you answer in a mathematical way?
That is where probability comes in.
Probability simply means the measure of the chance of an event happening. It tells us how
likely or unlikely something is.
If something is sure to happen, like the sun rising tomorrow, its probability is 1.
If something is impossible, like a human growing wings and flying today, its probability is 0.
Everything else lies between 0 and 1.
Mathematically, probability is defined as:
Probability of an event = (Number of favourable outcomes) / (Total number of possible
outcomes)
Let us understand this with a small example.
If you throw a fair die, there are 6 possible outcomes (1, 2, 3, 4, 5, 6).
If you want the probability of getting a 4, only one outcome is favorable (4 itself).
So probability = 1 / 6.
So, in very simple words:
Probability is a numerical value that tells us how likely an event is to occur in the future.
(b) Multiplicative Law of Probability Explained in Simple Words
Sometimes we are interested in knowing the probability of two or more events happening
together. For example:
What is the probability that when I toss a coin and throw a die:
Coin shows Head
Die shows 6
Here, you want both events to happen together, not just one.
This is where the Multiplicative Law of Probability is used.
General Multiplicative Law
If A and B are two events, then the probability that both A and B happen is written as:
P(A ∩ B) = P(A) × P(B | A)
This means:
First, the probability of event A happening,
Then, multiply it with the probability of B happening given that A has already
happened.
This second part is called conditional probability.
When Events are Independent
Now the question asks:
“How is this modified when events are independent?”
Let’s understand independence first.
Two events are said to be independent when the happening of one event does not affect
the happening of the other.
Examples:
Tossing a coin and rolling a die
Drawing a card and then tossing a coin
Rain falling and rolling a die (completely unrelated!)
If events are independent, then the probability of B happening does not depend on A,
meaning:
P(B | A) = P(B)
Now put this into the original formula:
P(A ∩ B) = P(A) × P(B)
So, for independent events, the multiplicative law becomes very simple:
Probability of both independent events happening together = Product of their individual
probabilities
For example:
Probability of getting Head in a coin toss = 1/2
Probability of getting 6 on a die = 1/6
Both happening together = 1/2 × 1/6 = 1/12
That’s the modified multiplicative law.
(c) Probability Question on Playing Cards
Now let’s solve the card problem in a very simple style.
We have:
A standard deck of 52 cards.
From this deck, one card is drawn at random.
We have to find the probability that the card is either a King or a Queen.
Let’s first recall what a standard deck of cards contains.
There are:
4 suits: Hearts, Diamonds, Clubs, Spades
Each suit has 13 cards
Among these 13 cards, there is:
1 King
1 Queen
So total:
Kings in deck = 4 (King of Hearts, Diamonds, Clubs, Spades)
Queens in deck = 4 (Queen of Hearts, Diamonds, Clubs, Spades)
We want:
Cards that are either King or Queen.
So number of favorable outcomes:
= Number of Kings + Number of Queens
= 4 + 4
= 8 cards
Total cards = 52
So probability:
Probability = Favourable outcomes / Total outcomes
Probability = 8 / 52
Now simplify:
8 ÷ 4 = 2
52 ÷ 4 = 13
So,
Probability = 2 / 13
This means if you randomly pick a card, the chance that it will be either a King or Queen is 2
out of 13.
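Both rules from this question can be verified with exact fractions (a minimal sketch using Python's standard library):

```python
from fractions import Fraction

# (c) Addition for mutually exclusive events: a card cannot be
# both a King and a Queen, so the favourable counts simply add.
kings, queens, total = 4, 4, 52
p_king_or_queen = Fraction(kings + queens, total)
print(p_king_or_queen)          # 2/13

# (b) Multiplication for independent events: Head on a coin AND a six on a die.
p_head, p_six = Fraction(1, 2), Fraction(1, 6)
print(p_head * p_six)           # 1/12
```

Using `Fraction` avoids rounding and automatically reduces 8/52 to 2/13.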
Let’s Summarize Everything Simply
1. Probability tells us how likely an event is.
Formula = Favourable outcomes / Total outcomes.
Its value lies between 0 and 1.
2. Multiplicative Law of Probability
o For any two events A and B:
P(A ∩ B) = P(A) × P(B | A)
3. If events are independent
o The formula becomes:
P(A ∩ B) = P(A) × P(B)
4. Card Question
o Kings = 4, Queens = 4 → total favorable = 8
o Total cards = 52
o Probability = 8 / 52 = 2 / 13
Final Thought
Probability is not just a mathematical topic; it is something we experience in daily life. When we guess the weather, play games, predict outcomes, or even make decisions, we unknowingly use probability. Once you understand the basic idea, everything becomes logical and interesting.
4.(a) Dene random variable along with its types.
(b) Disnguish between probability density funcon and probability mass funcon.
Ans: 🌟 Introduction
Probability and statistics often feel abstract, but they are the language of uncertainty.
Whether predicting the outcome of a dice roll, estimating rainfall, or analyzing exam scores,
we rely on random variables to represent uncertain outcomes numerically. Once we define
random variables, we use mathematical tools like probability mass functions (PMF) and
probability density functions (PDF) to describe how likely different outcomes are.
👉 In simple words: A random variable is a way of assigning numbers to uncertain events,
and PMF/PDF are ways of describing how those numbers are distributed.
🌟 (a) Random Variable: Definition and Types
📖 Definition
A random variable is a variable whose possible values are numerical outcomes of a random
phenomenon. It is a function that maps outcomes of a probability experiment to numbers.
👉 Example: Tossing a coin. If we assign 0 to “Heads” and 1 to “Tails,” then the coin toss
outcome becomes a random variable.
🌟 Types of Random Variables
Random variables are broadly classified into two categories:
1. Discrete Random Variable
Takes countable values (finite or infinite but countable).
Examples:
o Number of goals scored in a football match (0, 1, 2, …).
o Number of students present in class.
o Outcome of rolling a die (1–6).
👉 Discrete random variables are like steps on a staircase: you can count them one by one.
2. Continuous Random Variable
Takes uncountably infinite values within an interval.
Examples:
o Height of students in a class (can be 160.2 cm, 160.25 cm, etc.).
o Time taken to finish a race.
o Temperature at noon.
👉 Continuous random variables are like points on a line: you cannot count them individually because they form a continuum.
🌟 Key Differences Between Discrete and Continuous Random Variables
Feature
Discrete Random Variable
Values
Countable (finite or infinite)
Examples
Dice roll, number of children
Probability
Defined using PMF
Representation
Bar graph
🌟 (b) Probability Mass Function vs Probability Density Function
Once we know whether a random variable is discrete or continuous, we need a way to
describe its probabilities. That’s where PMF and PDF come in.
📊 Probability Mass Function (PMF)
Definition
The probability mass function (PMF) describes the probability distribution of a discrete random variable. It gives the probability that the random variable takes a specific value.
Mathematically:
p(x) = P(X = x)
where p(x) is the PMF.
Properties
1. p(x) ≥ 0 for all values of x.
2. The sum of probabilities over all possible values equals 1: Σ p(x) = 1.
Example
Rolling a fair die:
p(x) = 1/6 for x = 1, 2, 3, 4, 5, 6
👉 PMF is like a bar chart showing the probability of each discrete outcome.
📈 Probability Density Function (PDF)
Definition
The probability density function (PDF) describes the probability distribution of a continuous random variable. It gives the relative likelihood of the random variable taking values in a range.
Mathematically:
P(a ≤ X ≤ b) = ∫ₐᵇ f(x) dx
where f(x) is the PDF.
Properties
1. f(x) ≥ 0 for all x.
2. The total area under the curve equals 1: ∫₋∞^∞ f(x) dx = 1.
Example
If the time taken to finish a race follows a normal distribution, the PDF is the famous bell curve. The probability of finishing between 10 and 12 minutes is the area under the curve from 10 to 12.
👉 PDF is like a smooth curve where probabilities are represented by areas, not exact points.
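The two defining properties can be seen numerically (a sketch: a PMF's values sum to 1, while a PDF's probability is area under the curve; the standard normal density and the grid width below are our choices for illustration):

```python
import math

# PMF of a fair die: probabilities of exact values sum to 1
pmf = {x: 1 / 6 for x in range(1, 7)}
print(round(sum(pmf.values()), 10))   # 1.0

# PDF of a standard normal: probability is AREA, here approximated
# by a Riemann sum over [-6, 6], which captures essentially all the mass
def f(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

dx = 0.001
area = sum(f(-6 + i * dx) * dx for i in range(int(12 / dx)))
print(round(area, 4))                  # ≈ 1.0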
🌟 Key Differences Between PMF and PDF
Aspect | PMF | PDF
Random Variable Type | Discrete | Continuous
Probability at a Point | Directly gives P(X = x) | Probability at an exact point is 0; only intervals matter
Representation | Bar graph | Smooth curve
Example | Dice roll probabilities | Normal distribution of heights
Mathematical Form | Summation over values | Integration over intervals
📖 A Relatable Story
Imagine two friends, Riya and Arjun.
Riya loves board games. When she rolls a die, the outcome is discrete: only 1 to 6 are possible. The PMF tells her the probability of each number.
Arjun loves athletics. When he times his 100m sprint, the outcome is continuous: it could be 12.01 seconds, 12.02 seconds, or any value. The PDF tells him the probability of finishing within a certain time range.
👉 Their story shows how PMF and PDF apply to everyday life: one for countable outcomes, the other for continuous measurements.
🌟 Critical Analysis
Random variables are the backbone of probability theory.
PMF and PDF are not interchangeable: they depend on whether the variable is discrete or continuous.
PMF gives exact probabilities for discrete outcomes, while PDF gives densities that must be integrated over intervals.
Both ensure that the total probability equals 1, maintaining consistency in
probability theory.
📊 Summary Table
Concept | Definition | Example | Tool
Random Variable | Maps outcomes to numbers | Coin toss (0 = Heads, 1 = Tails) | Foundation
Discrete RV | Countable values | Dice roll | PMF
Continuous RV | Infinite values in an interval | Height, time | PDF
PMF | Probability of an exact value | P(X = x) | Bar chart
PDF | Probability over an interval | P(a ≤ X ≤ b) | Area under curve
🌍 Final Thoughts
Random variables transform uncertainty into numbers we can analyze. Discrete random
variables use PMF to describe exact probabilities, while continuous random variables use
PDF to describe probabilities over ranges. Together, they form the foundation of probability
and statistics, helping us make sense of randomness in games, science, and everyday life.
SECTION-C
5. Define the Poisson distribution and state the conditions under which this distribution is used. Also derive the main properties of the Poisson distribution.
Ans: 🌟 What is Poisson Distribution?
Imagine you are standing at a bus stop and you start counting how many buses pass in one
hour. Sometimes 5 buses may pass, sometimes 7, sometimes only 3. The number keeps
changing, but it usually stays around some average value.
Or think about how many phone calls you receive in one hour, how many typing mistakes
you make on a page, how many accidents happen in a city in a day, or how many customers
enter a shop in 10 minutes.
In all these examples, we are counting the number of events happening in a fixed time,
area, or space.
This is exactly what Poisson distribution deals with.
So in simple words:
Poisson distribution is a probability distribution that gives the probability of a certain
number of events happening in a fixed interval of time or space, when these events occur
independently and randomly, but with a known average rate.
Mathematically, the probability that exactly x events occur is given by:
P(X = x) = e^(−λ) · λˣ / x!
Where:
λ (lambda) is the average number of events
x is the number of events (0, 1, 2, 3, …)
e is a mathematical constant approximately equal to 2.71828
x! means factorial of x
But don’t worry about remembering the formula too deeply; understanding the idea is more
important.
Conditions for Using Poisson Distribution
We do not use Poisson distribution everywhere. It is used only when certain conditions are
satisfied. You can remember these as 4 simple rules:
The events should be rare or infrequent
Events should not happen continuously in bulk. For example:
Road accidents per day
Calls received per hour
Defective items in a batch
These are not extremely frequent, so Poisson works well.
Events occur independently
One event should not affect another. If one accident happens, it doesn’t force another
accident at the same time.
Events occur at a constant average rate
This means on average we know how many events happen.
For example:
On average, 3 emails per hour
On average, 5 cars cross a toll booth per minute
This average is the value of λ.
Two events cannot occur exactly at the same instant
Events occur one by one, not in bunches.
If these conditions hold, Poisson distribution is the right choice.
🌍 Real Life Examples of Poisson Distribution
To make it clearer, let us see where Poisson distribution is commonly applied:
Number of typing errors per page
Number of emergency calls in a hospital per hour
Number of arrivals at a bank counter per minute
Number of defects in 100 meters of cloth
Number of earthquakes in a year
Whenever you hear “number of occurrences in a fixed time/area”, think of Poisson.
📌 Derivation Concept in Simple Language
(Poisson as a limit of the Binomial Distribution)
You already know the Binomial distribution deals with success or failure in repeated trials. But what if:
the number of trials becomes extremely large (n is very big)
the probability of success becomes extremely small (p is very small)
but the average number of successes stays finite
Then the binomial distribution converts into the Poisson distribution.
Mathematically,
If n → ∞, p → 0, but np = λ (a finite constant),
Then:
P(X = x) = e^(−λ) · λˣ / x!
So, Poisson is actually a "special case" of the binomial distribution for rare events.
🧠 Main Properties of Poisson Distribution
Now let us understand the important characteristics.
Mean of Poisson Distribution
The mean (average number of events) is:
Mean = λ
This is the same λ used in the formula.
So if the average number of customers entering a shop per hour is 6, then:
Mean = 6
Variance of Poisson Distribution
A very special feature of the Poisson distribution is:
Variance = λ
This means:
Mean = Variance
This rarely happens in other distributions. It is a unique hallmark of the Poisson distribution.
So if λ = 4,
Mean = 4
Variance = 4
Standard deviation = √λ = √4 = 2
Shape of Poisson Distribution
The shape of the Poisson distribution depends on λ.
If λ is small (like 1 or 2), the distribution is highly skewed and peaked near zero.
If λ is moderate (like 5 or 6), the distribution becomes less skewed and smoother.
If λ is large, the Poisson distribution actually starts looking like a Normal Distribution.
So as λ increases, the curve shifts right and becomes more symmetric.
Skewness and Kurtosis
Without going too deep into mathematics:
Skewness = 1/√λ
This means the distribution is positively skewed, but the skewness decreases as λ increases.
Excess kurtosis = 1/λ
So as λ increases, the excess kurtosis shrinks and the distribution approaches the normal shape.
Additive Property
If two independent Poisson variables exist with means λ₁ and λ₂, then their sum is also Poisson with mean:
λ₁ + λ₂
This is very useful in real-life situations like merging two call centres, two traffic zones, etc.
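The hallmark Mean = Variance = λ can be checked directly from the formula (a sketch using the standard library; the support is truncated at 100 terms, which is far more than enough for λ = 4):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # P(X = x) = e^(-λ) λ^x / x!
    return exp(-lam) * lam**x / factorial(x)

lam = 4
probs = [poisson_pmf(x, lam) for x in range(100)]
mean = sum(x * p for x, p in enumerate(probs))
var = sum((x - mean) ** 2 * p for x, p in enumerate(probs))
print(round(mean, 4), round(var, 4))   # both equal λ = 4
```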
🎯 Why is Poisson Distribution Important?
Poisson distribution is loved in statistics because:
It is simple and realistic
It models rare events effectively
It helps planners, engineers, hospitals, industries, insurance companies and
researchers to predict chances of events happening.
For example:
Hospitals estimate expected emergency cases.
Police departments estimate crime rates.
Industries estimate defects in production.
Conclusion
Poisson distribution is not something mysterious or frightening. It is simply a mathematical
way to describe how many times a random event happens in a fixed time or space,
provided the events are rare, independent, and occur at a constant rate.
You should remember three main things:
1. It is used for counting events in a fixed interval.
2. It requires certain conditions like independence and constant average rate.
3. Its biggest identity is:
Mean = Variance = λ
If you remember these ideas, Poisson distribution will always feel friendly and logical.
6. What is the Binomial distribution? Derive its important properties.
Ans: 🎲 Binomial Distribution and Its Properties
🌟 Introduction
Probability theory is all about understanding uncertainty. One of the most important
probability distributions is the Binomial Distribution. It describes situations where we repeat an experiment multiple times, each trial having only two possible outcomes: success or failure.
👉 In simple words: The binomial distribution answers questions like “If I toss a coin 10
times, what is the probability of getting exactly 6 heads?”
This distribution is widely used in statistics, genetics, quality control, sports, and everyday
decision-making. Let’s explore its meaning, formula, and important properties step by step.
🌟 What is Binomial Distribution?
📖 Definition
The Binomial Distribution is a discrete probability distribution that gives the probability of obtaining exactly x successes in n independent trials, where each trial has the same probability of success p.
Mathematically:
P(X = x) = C(n, x) pˣ q^(n−x),  x = 0, 1, …, n
Where:
n = number of trials
x = number of successes
p = probability of success in each trial
q = 1 − p = probability of failure
C(n, x) = n! / [x!(n − x)!] is the binomial coefficient
🌟 Real-Life Examples
1. Tossing a coin n times and counting heads.
2. Checking n light bulbs, each with probability p of being defective.
3. Shooting n basketball free throws, with probability p of scoring each shot.
👉 The binomial distribution is everywherewhenever we repeat a yes/no experiment
multiple times.
🌟 Derivation of Binomial Distribution
Suppose we perform n independent trials. Each trial has:
Success probability = p
Failure probability = q = 1 − p
Now, what is the probability of getting exactly x successes?
1. Probability of one specific sequence:
o Example: Success in the first x trials, failure in the rest.
o Probability = pˣ q^(n−x).
2. Number of such sequences:
o Successes can occur in any x of the n trials.
o Number of ways = C(n, x).
3. Total probability:
P(X = x) = C(n, x) pˣ q^(n−x)
This is the binomial probability formula.
🌟 Important Properties of Binomial Distribution
Now let’s derive and explain its key properties.
1. Mean (Expected Value)
The mean of a binomial distribution is:
E(X) = np
Derivation:
Each trial has expected value = p.
For n independent trials, the expected number of successes = np.
👉 Example: Toss a coin 10 times (n = 10, p = 0.5). Expected heads = 10 × 0.5 = 5.
2. Variance
The variance of a binomial distribution is:
Var(X) = npq = np(1 − p)
Derivation:
Variance of one Bernoulli trial = pq.
For n independent trials, variance = npq.
👉 Example: Toss a coin 10 times. Variance = 10 × 0.5 × 0.5 = 2.5.
3. Standard Deviation
σ = √(npq)
This measures the spread of the distribution.
4. Shape of Distribution
If p = 0.5, the distribution is symmetric.
If p < 0.5, the distribution is skewed to the right.
If p > 0.5, the distribution is skewed to the left.
👉 Example: In fair coin tosses, the distribution of heads is symmetric. But if a basketball player has a 90% success rate, the distribution of misses is skewed.
5. Moment Generating Function (MGF)
The MGF of the binomial distribution is:
M(t) = (q + p eᵗ)ⁿ
This function helps derive higher moments (mean, variance, etc.).
6. Additivity Property
If X ~ B(n₁, p) and Y ~ B(n₂, p) are independent, then:
X + Y ~ B(n₁ + n₂, p)
👉 Meaning: Combining two binomial experiments with the same success probability gives another binomial distribution.
7. Limiting Cases
As n → ∞ and p → 0 such that np = λ, the binomial distribution tends to the Poisson distribution.
As n becomes large and p is moderate, the binomial distribution approximates the Normal distribution (Central Limit Theorem).
👉 This makes the binomial distribution a bridge between discrete and continuous probability models.
📖 A Relatable Story
Imagine a cricket player practicing batting. Each ball bowled is a trial: either he hits (success) or misses (failure). If he faces 20 balls with a 40% chance of hitting each, the binomial distribution tells us the probability of scoring exactly 8 hits.
Mean = np = 20 × 0.4 = 8.
Variance = npq = 20 × 0.4 × 0.6 = 4.8.
👉 This shows how the binomial distribution predicts performance in repeated yes/no experiments.
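The cricket example can be checked against the formulas (a sketch using the standard library; `math.comb` supplies the binomial coefficient):

```python
from math import comb

def binom_pmf(x, n, p):
    # P(X = x) = C(n, x) p^x q^(n-x)
    return comb(n, x) * p**x * (1 - p) ** (n - x)

n, p = 20, 0.4                                  # 20 balls, 40% hit rate
probs = [binom_pmf(x, n, p) for x in range(n + 1)]
mean = sum(x * q for x, q in enumerate(probs))
var = sum((x - mean) ** 2 * q for x, q in enumerate(probs))
print(round(mean, 2), round(var, 2))            # np = 8.0, npq = 4.8
print(round(binom_pmf(8, n, p), 4))             # P(exactly 8 hits) ≈ 0.1797
```

So even though 8 hits is the single most likely outcome, its probability is only about 18%; the rest of the mass is spread over neighbouring counts.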
🌟 Applications of Binomial Distribution
1. Genetics: Probability of inheriting traits.
2. Business: Probability of defective items in a batch.
3. Sports: Predicting number of goals or hits.
4. Medicine: Success rate of treatments in trials.
5. Education: Probability of students passing an exam.
📊 Summary Table
Property | Formula | Meaning
PMF | P(X = x) = C(n, x) pˣ q^(n−x) | Probability of x successes
Mean | np | Expected successes
Variance | npq | Spread of distribution
Std. Dev. | √(npq) | Measure of variability
Shape | Symmetric if p = 0.5 | Skewed otherwise
MGF | (q + p eᵗ)ⁿ | Generates moments
Limiting Case | Poisson / Normal | Approximations
🌍 Final Thoughts
The Binomial Distribution is one of the cornerstones of probability theory. It models repeated independent trials with two outcomes, giving us precise probabilities for different numbers of successes. Its properties (mean, variance, symmetry, and limiting behavior) make it versatile and powerful.
SECTION-D
7. Distinguish between a population and a sample. Discuss the relative merits of census and sample methods of collecting data.
Ans: 🌍 What is Population?
In statistics, population does not mean people only. Instead, it means the entire group of
individuals, items, or observations that you are interested in studying.
If you want to study:
The height of all students in your college → The population = all students in the
college
The income of people in India → The population = all people living in India
The durability of bulbs manufactured by a company → The population = all bulbs
produced
So population is like a big container that holds every single unit related to your study. It may
consist of millions or even billions of units. Sometimes it is finite (like number of students in
a school), sometimes infinite (like number of stars, or the number of times you can toss a
coin theoretically).
👥 What is a Sample?
Now think practically.
Can you measure the height of every single student in a large university?
Can you go to every home in India to ask about income?
Can a bulb company test all bulbs by switching them on until they fail?
If they do that, there will be no bulbs left to sell!
So, in many situations, studying the entire population is either:
Too expensive
Too time-consuming
Physically impossible
That’s when we choose a sample.
A sample is a part of the population, a smaller group carefully selected to represent the
whole group.
For example:
Instead of measuring all 10,000 students, you may measure only 500 students.
Instead of checking every bulb, you test just a few hundred bulbs.
Instead of asking every person in India, you ask a few thousand people from
different regions.
If the sample is chosen properly, it can give almost the same results as the whole
population, but with far less effort.
So in simple words:
Population = The whole cake
Sample = One slice of the cake used to judge the taste of the whole cake
Difference Between Population and Sample (In Simple Words)
Population | Sample
Entire group under study | A part of the population
Large in size | Small in size
Costly and time-consuming to study | Cheaper and faster
More accurate | Slightly less accurate but close enough
Example: All voters in India | Example: 2,000 selected voters
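As a rough illustration of the comparison above, here is a short Python sketch; the marks data and the seed are hypothetical, invented purely for the example:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: marks of 10,000 students
population = [random.gauss(60, 12) for _ in range(10_000)]

# A sample of 500 students drawn at random, without replacement
sample = random.sample(population, 500)

print("population mean:", round(statistics.mean(population), 2))
print("sample mean:    ", round(statistics.mean(sample), 2))
```

The two means come out very close even though the sample is only 5% of the population: the "slice of the cake" judges the whole cake well, provided the slice is drawn at random.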
📊 Census Method vs Sample Method
Once we understand population and sample, we can now discuss two methods of data
collection:
Census Method
Sample Method
Census Method (Complete Enumeration)
A census means collecting information from every single unit of the population.
The most famous example is:
The Population Census of India, conducted every 10 years, where data is collected
about every citizen: age, gender, occupation, literacy, etc.
Other examples:
School collecting records of every student.
Government counting every household in a village.
Merits (Advantages) of Census Method
Highly Accurate and Reliable
Since every unit is studied, chances of error are very small. The information is very detailed
and trustworthy.
Provides Complete Information
You get full data about everyone, not just estimates. This helps in planning, policymaking,
and research.
Useful for Small Populations
If the population is small (like students in one classroom), census becomes easy and
practical.
Useful for Government Planning
National level programmes like employment schemes, health services, and education
planning require complete information.
Demerits (Limitations) of Census Method
Very Expensive
Huge staff, transportation, printing, supervision: everything costs a lot.
Time-Consuming
Some censuses take years to complete and analyze.
Difficult to Use for Large Populations
When the population is extremely large, collecting data from everyone becomes very
difficult.
Not Suitable When Testing Destroys Items
For example, if you want to test bulb life by burning them until they fail, you cannot test
every bulb!
🎯 Sample Method (Partial Enumeration)
In the sample method, data is collected only from a selected portion of the population. But
that portion is chosen scientifically so that it represents the whole population fairly.
Merits (Advantages) of Sample Method
Less Costly
Because fewer people or items are studied, it saves huge money.
Saves Time
Research can be completed quickly. This is especially useful when decisions must be taken
urgently.
Practical for Large Populations
Countries, industries, hospitals: wherever the population is too large, the sample method
is the practical choice.
Sometimes More Accurate
Interestingly, well-planned sampling can sometimes be more accurate than a census because:
Better trained investigators can be used
Less pressure leads to fewer mistakes
Useful When Testing is Destructive
Like testing:
medicine effects
bulb life
quality of food products
You can test only a sample without destroying everything.
Demerits (Limitations) of Sample Method
Sampling Errors May Occur
If the sample is not properly selected, results may be misleading.
Bias May Occur
If the selection is careless or partial, the sample may not represent the population fairly.
Not Suitable When Population is Very Small
If there are only a few units, it is better to study all rather than sample.
🎤 Final Understanding in Simple Words
Think of a doctor testing your blood. He does not take out all your blood. He only takes a
small sample, but that sample tells the whole story. Similarly, in statistics, when we cannot
study everyone, we choose a sample.
Population → Whole group
Sample → Part of the group representing the whole
Census Method → Study everyone
Sample Method → Study only a selected few
Conclusion
Understanding population and sample is the foundation of statistics. Census gives complete
and reliable data but is costly and time-consuming. Sampling is cheaper, quicker, and
practical, especially for large populations, but requires careful selection to avoid errors.
In real life, both methods are useful. Governments often use census, while researchers,
scientists, economists, and companies usually rely on sampling. The important thing is to
choose the right method according to the size, purpose, time, budget, and nature of the
study.
8.(a) What is an esmator? Also discuss some important properes of a good esmator.
(b) Explain the concept of standard error.
Ans: 📊 Estimators and Standard Error
🌟 Introduction
Statistics is all about making sense of data. Often, we don’t have access to the entire
population, so we rely on samples. From these samples, we try to estimate unknown
population parameters (like mean, variance, or proportion). The tools we use for this are
called estimators.
👉 In simple words: An estimator is like a detective’s guess about the truth, based on clues
(sample data). But not all guesses are equally good; some are more reliable, precise, and
fair. That’s why statisticians talk about the properties of a good estimator.
Alongside this, we also need to measure how much our estimates might vary from sample
to sample. This measure is called the standard error.
🌟 (a) What is an Estimator?
📖 Definition
An estimator is a statistical rule or formula that provides an estimate of a population
parameter based on sample data.
The population parameter is the true but unknown value (e.g., the population mean μ).
The estimator is the formula we apply to sample data (e.g., the sample mean x̄).
The estimate is the actual numerical value we get after applying the estimator to a
specific sample.
👉 Example:
Parameter: True average height of all students in a university.
Estimator: The sample mean formula x̄ = (x₁ + x₂ + … + xₙ)/n.
Estimate: The calculated average height from a sample of 50 students.
🌟 Types of Estimators
1. Point Estimator
o Provides a single value as an estimate of the parameter.
o Example: Sample mean x̄ as an estimate of population mean μ.
2. Interval Estimator
o Provides a range (interval) within which the parameter is expected to lie, with
a certain confidence level.
o Example: Confidence interval for the mean: x̄ ± z(σ/√n).
👉 Point estimators give us a “best guess,” while interval estimators give us a “safe range.”
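Both kinds of estimator can be sketched in a few lines of Python. The heights below are hypothetical, and z = 1.96 is the usual approximate 95% multiplier:

```python
import statistics
from math import sqrt

# Hypothetical sample of 10 student heights (cm)
heights = [168, 172, 165, 170, 174, 169, 171, 167, 173, 166]
n = len(heights)

# Point estimator: the sample mean gives a single "best guess"
point_estimate = statistics.mean(heights)

# Interval estimator: mean +/- z * (s / sqrt(n)) gives a "safe range"
s = statistics.stdev(heights)   # sample standard deviation
se = s / sqrt(n)
z = 1.96                        # approximate 95% confidence multiplier
interval = (point_estimate - z * se, point_estimate + z * se)

print("point estimate:", point_estimate)  # 169.5
print("95% interval:  ", tuple(round(x, 2) for x in interval))
```

The point estimate sits at the centre of the interval estimate, which is exactly the "best guess" versus "safe range" distinction made above.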
🌟 Properties of a Good Estimator
Not all estimators are equally good. Statisticians use certain criteria to judge them.
1. Unbiasedness
An estimator is unbiased if its expected value equals the true parameter.
Mathematically: E(θ̂) = θ.
Example: The sample mean x̄ is an unbiased estimator of the population mean μ.
👉 Unbiasedness means the estimator doesn’t systematically overestimate or
underestimate.
2. Consistency
An estimator is consistent if, as the sample size increases, it converges to the true
parameter.
Example: With larger samples, the sample mean gets closer to the population mean.
👉 Consistency means “the bigger the sample, the better the guess.”
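Consistency can be seen in a quick simulation. This is a hedged sketch with a hypothetical distribution centred at 50; the seed and sample sizes are invented for illustration:

```python
import random
import statistics

random.seed(0)
TRUE_MEAN = 50  # hypothetical population mean

def sample_mean(n):
    """Mean of n draws from a distribution centred at TRUE_MEAN."""
    return statistics.mean(random.gauss(TRUE_MEAN, 10) for _ in range(n))

# The error of the sample mean shrinks as n grows
for n in (10, 1_000, 100_000):
    print(n, "error:", round(abs(sample_mean(n) - TRUE_MEAN), 3))
```

With n = 10 the guess is typically off by a few units; with n = 100,000 it lands within a small fraction of a unit, which is "the bigger the sample, the better the guess" in action.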
3. Efficiency
Among unbiased estimators, the one with the smallest variance is considered more
efficient.
Example: If two estimators both estimate θ, the one with lower variability across
samples is preferred.
👉 Efficiency means “less scatter, more precision.”
4. Sufficiency
An estimator is sufficient if it uses all the information in the sample relevant to the
parameter.
Example: The sample mean is sufficient for estimating the population mean in a
normal distribution.
👉 Sufficiency means “no wasted information.”
5. Minimum Mean Squared Error (MSE)
MSE combines bias and variance:
󰇛
󰇜󰇛
󰇜 󰇟󰇛
󰇜󰇠
A good estimator has low MSE.
👉 MSE balances accuracy (bias) and precision (variance).
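The decomposition can be played with numerically. In this sketch the variance and bias figures are invented to show that a slightly biased estimator can still win on MSE:

```python
def mse(variance, bias):
    """MSE = variance + bias^2 (the bias-variance decomposition above)."""
    return variance + bias**2

# Two hypothetical estimators of the same parameter:
unbiased_but_noisy = mse(variance=4.0, bias=0.0)
slightly_biased    = mse(variance=1.0, bias=0.5)

print(unbiased_but_noisy)  # 4.0
print(slightly_biased)     # 1.25
```

Here the biased estimator has the lower MSE (1.25 < 4.0), which is why MSE, rather than bias alone, is used to compare estimators.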
🌟 Summary of Properties
Property | Meaning | Example
Unbiasedness | Expected value = true parameter | Sample mean for population mean
Consistency | Improves with larger samples | Sample mean converges to μ
Efficiency | Smallest variance among estimators | Preferred estimator
Sufficiency | Uses all sample information | Sample mean in a normal distribution
Minimum MSE | Low bias + low variance | Balanced estimator
🌟 (b) Concept of Standard Error
📖 Definition
The standard error (SE) measures the variability of an estimator across different samples. It
tells us how much the estimate is expected to fluctuate due to sampling.
👉 In simple words: Standard error is the “average error” we expect when using a sample
to estimate a population parameter.
🌟 Formula for Standard Error
1. Standard Error of the Mean (SEM):
SE(x̄) = σ/√n
Where:
σ = population standard deviation
n = sample size
2. Standard Error of a Proportion:
SE(p̂) = √(p(1 − p)/n)
3. Standard Error of the Difference Between Two Means:
SE(x̄₁ − x̄₂) = √(σ₁²/n₁ + σ₂²/n₂)
🌟 Interpretation
Smaller SE → estimator is more precise.
Larger SE → estimator is less reliable.
SE decreases as sample size increases (because larger samples reduce variability).
👉 Example:
If the average height of 50 students is 170 cm with SE = 2 cm, it means the sample
mean is expected to vary by about ±2 cm from the true population mean.
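The example can be generalised with a short Python sketch of the SEM and proportion formulas; σ = 10 and p = 0.4 are illustrative values:

```python
from math import sqrt

def se_mean(sigma, n):
    """Standard error of the sample mean: sigma / sqrt(n)."""
    return sigma / sqrt(n)

def se_proportion(p, n):
    """Standard error of a sample proportion: sqrt(p(1 - p) / n)."""
    return sqrt(p * (1 - p) / n)

# SE shrinks as the sample grows: quadrupling n halves the SE
for n in (25, 100, 400):
    print(n, round(se_mean(10, n), 2), round(se_proportion(0.4, n), 3))
```

Note that moving from n = 100 to n = 400 only halves the SE: precision improves with the square root of the sample size, not in direct proportion to it.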
🌟 Importance of Standard Error
1. Measures Precision: SE tells us how precise our estimator is.
2. Basis for Confidence Intervals: Confidence intervals are built using SE.
o Example: .
3. Hypothesis Testing: SE is used to calculate test statistics (like t-tests and z-tests).
4. Comparison of Estimators: Lower SE indicates a better estimator.
📖 A Relatable Story
Imagine you’re trying to guess the average marks of students in a school. You take a sample
of 30 students and calculate the mean. Next day, you take another sample of 30 students
and calculate again. The two sample means differ slightly.
The formula you used (sample mean) is the estimator.
The actual numbers you got are estimates.
The variation between these sample means is captured by the standard error.
👉 This shows how estimators and standard error work together: one gives the guess, the
other tells us how reliable the guess is.
🌟 Critical Analysis
Estimators are the backbone of statistical inference.
A good estimator should be unbiased, consistent, efficient, sufficient, and have low
MSE.
Standard error complements estimators by quantifying their reliability.
Together, they allow us to make sound judgments about populations based on
samples.
📊 Summary Table
Concept | Definition | Example
Estimator | Formula to estimate a parameter | Sample mean x̄ for population mean μ
Estimate | Numerical value from a sample | 170 cm average height
Good Estimator Properties | Unbiased, consistent, efficient, sufficient, low MSE | Sample mean x̄
Standard Error | Variability of an estimator | SE of mean = σ/√n
Importance | Precision, confidence intervals, hypothesis testing | Smaller SE = more reliable
🌍 Final Thoughts
Estimators and standard error are two pillars of statistical inference. Estimators give us the
tools to guess unknown population parameters, while standard error tells us how
trustworthy those guesses are.
This paper has been carefully prepared for educational purposes. If you notice any
mistakes or have suggestions, feel free to share your feedback.